Deep Learning without Weight Transport
Current algorithms for deep learning probably cannot run in the brain because they rely on weight transport, where forward-path neurons transmit their synaptic weights to a feedback path, in a way that is likely impossible biologically. An algorithm called feedback alignment achieves deep learning without weight transport by using random feedback weights, but it performs poorly on hard visual-recognition tasks. Here we describe two mechanisms -- a neural circuit called a weight mirror and a modification of an algorithm proposed by Kolen and Pollack in 1994 -- both of which let the feedback path learn appropriate synaptic weights quickly and accurately even in large networks, without weight transport or complex wiring. Tested on the ImageNet visual-recognition task, these mechanisms outperform both feedback alignment and the newer sign-symmetry method, and nearly match backprop, the standard algorithm of deep learning, which uses weight transport.
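For readers unfamiliar with feedback alignment, a minimal numpy sketch of the core idea follows: the backward pass sends errors through a fixed random matrix B instead of the transported transpose of the forward weights. The network size, task, and learning rate here are arbitrary illustrations, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer regression network trained with feedback alignment.
n_in, n_hid, n_out = 20, 64, 5
W1 = rng.normal(0.0, 0.1, (n_hid, n_in))   # forward weights, layer 1
W2 = rng.normal(0.0, 0.1, (n_out, n_hid))  # forward weights, layer 2
B = rng.normal(0.0, 0.1, (n_hid, n_out))   # FIXED random feedback weights

X = rng.normal(size=(200, n_in))           # random regression task
T = rng.normal(size=(200, n_out))
lr = 0.01

for epoch in range(200):
    H = np.maximum(0.0, X @ W1.T)          # forward pass (ReLU hidden layer)
    Y = H @ W2.T
    E = Y - T                              # output error
    # Backprop would compute the hidden error as E @ W2, transporting W2's
    # values to the feedback path.  Feedback alignment uses fixed B instead:
    dH = (E @ B.T) * (H > 0.0)
    W2 -= lr * E.T @ H / len(X)
    W1 -= lr * dH.T @ X / len(X)

# Training still reduces the loss: over time the forward weights W2 tend to
# align with B.T, which is the "alignment" that gives the method its name.
```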
Pretraining with Random Noise for Fast and Robust Learning without Weight Transport
The brain prepares for learning even before interacting with the environment, by refining and optimizing its structures through spontaneous neural activity that resembles random noise. However, the mechanism of this process remains poorly understood, and it is unclear whether it can benefit machine-learning algorithms. Here, we study this issue using a neural network with a feedback alignment algorithm, demonstrating that pretraining neural networks with random noise increases learning efficiency and generalization ability without weight transport. First, we found that random noise training modifies forward weights to match the backward synaptic feedback, which is necessary for errors to be taught through feedback alignment. As a result, a network with pre-aligned weights learns notably faster and reaches higher accuracy than a network without random noise training, even becoming comparable to the backpropagation algorithm.
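A toy sketch of the noise-pretraining idea, under the same feedback-alignment setup as above. The network size, training schedule, and alignment metric are illustrative assumptions; the paper's experiments use larger networks and real downstream tasks.

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_hid, n_out = 20, 64, 5
W1 = rng.normal(0.0, 0.1, (n_hid, n_in))
W2 = rng.normal(0.0, 0.1, (n_out, n_hid))
B = rng.normal(0.0, 0.1, (n_hid, n_out))   # fixed random feedback weights

def alignment_deg(W2, B):
    # Angle between the forward weights W2 and the feedback weights' transpose.
    w, b = W2.ravel(), B.T.ravel()
    cos = w @ b / (np.linalg.norm(w) * np.linalg.norm(b))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

print(f"alignment before noise pretraining: {alignment_deg(W2, B):.1f} deg")

# "Pretraining" on pure noise: random inputs paired with random targets,
# with errors delivered through the fixed feedback matrix B.
lr = 0.01
for step in range(5000):
    x = rng.normal(size=n_in)
    t = rng.normal(size=n_out)
    h = np.maximum(0.0, W1 @ x)
    e = W2 @ h - t
    dh = (B @ e) * (h > 0.0)               # error via fixed B, no transport
    W2 -= lr * np.outer(e, h)
    W1 -= lr * np.outer(dh, x)

print(f"alignment after noise pretraining:  {alignment_deg(W2, B):.1f} deg")
# The paper reports that this angle shrinks during pure-noise training, so a
# subsequent task starts from weights already matched to the feedback path.
```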
Deep Learning without Weight Transport
Mohamed Akrout, Collin Wilson, Peter Humphreys, Timothy Lillicrap, Douglas B. Tweed
In a typical deep-learning network, some signals flow along a forward path through multiple layers of processing units from the input layer to the output, while other signals flow back from the output layer along a feedback path. Forward-path signals perform inference (e.g. they try to infer what objects are depicted in an input image), while feedback-path signals carry the error information that guides learning.
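The first of the paper's two mechanisms, the weight mirror, admits a compact caricature: during a noise-driven "mirror" phase, a local Hebbian-style update pulls the feedback weights toward a scaled transpose of the forward weights, with no transport of weight values. The sketch below is a linear, single-layer simplification; the learning rate, decay term, and noise scale are assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(2)
n_in, n_out = 30, 12
W = rng.normal(0.0, 0.2, (n_out, n_in))  # forward weights of one layer
B = np.zeros((n_in, n_out))              # feedback weights, learned locally

eta, decay, sigma = 0.01, 0.01, 1.0

# Mirror phase: the layer is driven by zero-mean noise and the feedback
# synapses apply a local correlational update.  In expectation,
# E[x y^T] = E[x x^T] W^T = sigma^2 W^T, so with decay the fixed point is
# B = (sigma^2 / decay) W^T -- a scaled transpose, learned from local signals.
for step in range(20000):
    x = rng.normal(0.0, sigma, n_in)     # noise fired by forward-path neurons
    y = W @ x                            # activity seen by the next layer
    B += eta * (np.outer(x, y) - decay * B)

# Cosine similarity near 1 means B has aligned with W^T.
w, b = W.T.ravel(), B.ravel()
print("cosine(W^T, B) =", w @ b / (np.linalg.norm(w) * np.linalg.norm(b)))
```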